
    Scheduling multiple divisible loads on a linear processor network

    Get PDF
    Min, Veeravalli, and Barlas have recently proposed strategies to minimize the overall execution time of one or several divisible loads on a heterogeneous linear network, using one or more installments. We show, on a very simple example, that their approach does not always produce a solution and that, when it does, the solution is often suboptimal. We also show how to find an optimal schedule for any instance, once the number of installments per load is given. We then formally state that any optimal schedule has an infinite number of installments under a linear cost model such as the one assumed in the original papers. Therefore, such a cost model cannot be used to design practical multi-installment strategies. Finally, extensive simulations confirm that the best solution is always produced by the linear programming approach, while the solutions of the original papers can be far from optimal.
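
    The snippet below is a minimal sketch of the idea that, once the number of installments is fixed, an optimal load distribution can be obtained from a linear program. It uses a deliberately simplified setting, a single load, a single installment, and a master sending sequentially to workers over a star/bus topology rather than the heterogeneous linear network studied in the paper; the per-unit communication costs c and computation costs w are illustrative assumptions.

        # Minimal sketch: one divisible load, one installment, a master sending
        # sequentially to p workers over heterogeneous links (simplified bus/star
        # model, NOT the linear-network model of the paper).
        import numpy as np
        from scipy.optimize import linprog

        c = np.array([1.0, 2.0, 1.5])   # per-unit communication cost of each worker (assumed)
        w = np.array([3.0, 1.0, 2.0])   # per-unit computation cost of each worker (assumed)
        p = len(c)

        # Variables: x = (alpha_1, ..., alpha_p, T); minimize the makespan T.
        obj = np.zeros(p + 1)
        obj[-1] = 1.0

        # Worker i finishes at sum_{j<=i} alpha_j*c_j + alpha_i*w_i  <=  T.
        A_ub = np.zeros((p, p + 1))
        for i in range(p):
            A_ub[i, :i + 1] = c[:i + 1]
            A_ub[i, i] += w[i]
            A_ub[i, -1] = -1.0
        b_ub = np.zeros(p)

        # The whole load must be distributed: sum_i alpha_i = 1.
        A_eq = np.append(np.ones(p), 0.0).reshape(1, -1)
        b_eq = [1.0]

        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (p + 1), method="highs")
        print("load fractions:", res.x[:p], "makespan:", res.x[-1])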

    Current fluctuations in systems with diffusive dynamics, in and out of equilibrium

    Get PDF
    For diffusive systems that can be described by fluctuating hydrodynamics and by the Macroscopic Fluctuation Theory of Bertini et al., the total current fluctuations display universal features when the system is closed and in equilibrium. When the system is driven out of equilibrium by a boundary drive, the current fluctuations, at least for a particular family of diffusive systems, display the same universal features as in equilibrium. To achieve this result, we exploit a mapping between the fluctuations in a boundary-driven nonequilibrium system and those in its equilibrium counterpart. Finally, we prove, for two well-studied processes, namely the Simple Symmetric Exclusion Process and the Kipnis-Marchioro-Presutti model for heat conduction, that the distribution of the current out of equilibrium can be deduced from the distribution in equilibrium. Thus, for these two microscopic models, the mapping between the out-of-equilibrium setting and the equilibrium one is exact.
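
    As a concrete illustration of the kind of boundary-driven diffusive system considered here, the sketch below runs a rough Monte Carlo simulation of the Simple Symmetric Exclusion Process on a finite chain coupled to reservoirs of densities rho_a and rho_b, and records the time-integrated particle current injected at the left boundary. The random-sequential update rule and the parameter values are simplifying assumptions; the paper concerns the full distribution of such a current, not just its mean.

        # Rough Monte Carlo sketch of the boundary-driven SSEP on L sites,
        # recording the integrated current Q injected by the left reservoir.
        # Random-sequential updates approximate the continuous-time dynamics.
        import numpy as np

        rng = np.random.default_rng(0)
        L, rho_a, rho_b = 50, 0.8, 0.2           # chain length and reservoir densities (assumed)
        steps = 2_000_000

        eta = (rng.random(L) < 0.5).astype(int)  # initial occupation numbers
        Q = 0                                    # net particles injected at the left boundary

        for _ in range(steps):
            bond = rng.integers(L + 1)           # pick one of the L+1 bonds (incl. boundaries)
            if bond == 0:                        # left reservoir <-> site 0
                if eta[0] == 0 and rng.random() < rho_a:
                    eta[0] = 1; Q += 1
                elif eta[0] == 1 and rng.random() < 1 - rho_a:
                    eta[0] = 0; Q -= 1
            elif bond == L:                      # site L-1 <-> right reservoir
                if eta[-1] == 0 and rng.random() < rho_b:
                    eta[-1] = 1
                elif eta[-1] == 1 and rng.random() < 1 - rho_b:
                    eta[-1] = 0
            else:                                # symmetric exchange across an internal bond
                eta[bond - 1], eta[bond] = eta[bond], eta[bond - 1]

        # With rho_a > rho_b the drive sustains a positive mean current from left to right;
        # repeating many runs gives the distribution of Q discussed in the paper.
        print("net particles injected from the left:", Q, "in", steps, "bond updates")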

    Equilibrium-like fluctuations in some boundary-driven open diffusive systems

    Get PDF
    There exist some boundary-driven open systems with diffusive dynamics whose particle-current fluctuations exhibit universal features that belong to the Edwards-Wilkinson universality class. We achieve this result by establishing a mapping, at the level of the system's fluctuations, to an equivalent open, yet equilibrium, diffusive system. We also discuss the possibility of observing dynamic phase transitions using the particle current as a control parameter.

    Checkpointing algorithms and fault prediction

    Get PDF
    This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical first-order analysis of Young and Daly in the presence of a fault prediction system, characterized by its recall and its precision. In this framework, we provide an optimal algorithm to decide when to take predictions into account, and we derive the optimal value of the checkpointing period. These results make it possible to analytically assess the key parameters that impact the performance of fault predictors at very large scale. Comment: Supported in part by ANR Rescue. Published in Journal of Parallel and Distributed Computing. arXiv admin note: text overlap with arXiv:1207.693
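
    For reference, the classical first-order analysis of Young and Daly that the paper extends balances checkpoint overhead against re-execution after a failure: with checkpoint cost C and platform MTBF mu, the waste of a period T is roughly C/T + T/(2*mu), minimized at T = sqrt(2*mu*C). The sketch below only reproduces this predictor-free baseline; the paper's extension additionally involves the predictor's recall and precision, and its formulas are not reproduced here. Parameter values are illustrative.

        # First-order Young/Daly waste model (predictor-free baseline):
        #   waste(T) ~ C/T            (fraction of time spent checkpointing)
        #            + T/(2*mu)       (expected re-execution lost to failures)
        # minimized at T_opt = sqrt(2*mu*C).
        import math

        C = 600.0          # checkpoint cost in seconds (assumed)
        mu = 86_400.0      # platform MTBF in seconds, i.e. one fault per day (assumed)

        def waste(T):
            return C / T + T / (2.0 * mu)

        T_opt = math.sqrt(2.0 * mu * C)
        print(f"Young/Daly period: {T_opt:.0f} s, first-order waste: {waste(T_opt):.3%}")

        # Sanity check: the waste is indeed larger away from T_opt.
        for T in (0.5 * T_opt, T_opt, 2.0 * T_opt):
            print(f"  T = {T:8.0f} s  ->  waste = {waste(T):.3%}")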

    Building a path-integral calculus: a covariant discretization approach

    Full text link
    Path integrals are a central tool when it comes to describing quantum or thermal fluctuations of particles or fields. Their success dates back to Feynman, who showed how to use them within the framework of quantum mechanics. Since then, path integrals have pervaded all areas of physics where fluctuation effects, quantum and/or thermal, are of paramount importance. Their appeal rests on the fact that they convert a problem formulated in terms of operators into one of sampling classical paths with a given weight. Path integrals are the mirror image of our conventional Riemann integrals, with functions replacing the real numbers one usually sums over. However, unlike conventional integrals, path integration suffers from a serious drawback: in general, one cannot make non-linear changes of variables without committing an error of some sort. Thus, no path-integral-based calculus is possible. Here we identify the deep mathematical reasons behind this important caveat, and we propose cures for systems described by one degree of freedom. Our main result is a construction of path integration, through a direct time-discretization procedure, that is free of this longstanding problem. Comment: 22 pages, 2 figures, 1 table. Typos corrected
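
    To make the time-discretization viewpoint concrete, the sketch below samples a discretized imaginary-time path integral for a single harmonic degree of freedom using the naive (non-covariant) discretized action, and estimates the average of x^2. It only illustrates what direct time-discretization of a path integral means in practice; it uses none of the covariant construction developed in the paper, and the frequency, temperature and number of slices are illustrative assumptions.

        # Naive time-discretized imaginary-time path integral for a 1D harmonic
        # oscillator (m = hbar = 1), sampled with Metropolis moves on single slices.
        # Discretized action: S = sum_k [ (x_{k+1}-x_k)^2/(2*dt) + dt*V(x_k) ].
        import numpy as np

        rng = np.random.default_rng(1)
        omega, beta, N = 1.0, 4.0, 64          # frequency, inverse temperature, time slices (assumed)
        dt = beta / N
        V = lambda x: 0.5 * omega**2 * x**2

        x = np.zeros(N)                        # periodic path x_0, ..., x_{N-1}
        sweeps, step, samples = 20_000, 1.0, []

        for sweep in range(sweeps):
            for j in range(N):
                left, right = x[(j - 1) % N], x[(j + 1) % N]
                new = x[j] + step * rng.uniform(-1, 1)
                dS = (((new - left)**2 + (right - new)**2
                       - (x[j] - left)**2 - (right - x[j])**2) / (2 * dt)
                      + dt * (V(new) - V(x[j])))
                if dS < 0 or rng.random() < np.exp(-dS):
                    x[j] = new
            if sweep > sweeps // 5:            # discard the first 20% as burn-in
                samples.append(np.mean(x**2))

        exact = 1.0 / (2 * omega * np.tanh(beta * omega / 2))   # continuum result for <x^2>
        print(f"<x^2> estimate: {np.mean(samples):.3f}   continuum value: {exact:.3f}")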

    Finite size effects in a mean-field kinetically constrained model: dynamical glassiness and quantum criticality

    Full text link
    Using the example of a mean-field Fredrickson-Andersen kinetically constrained model, we focus on the known property that the equilibrium dynamics take place at a first-order dynamical phase transition point in the space of time realizations. We investigate the finite-size properties of this first-order transition. By discussing and exploiting a mapping of the classical dynamical transition, an argued signature of glassiness, onto a first-order quantum transition, we show that the quantum analogy can be used to extract finite-size properties which, in many respects, are similar to those of genuine mean-field quantum systems with a first-order transition. We fully characterize the finite-size properties of the order parameter across the first-order transition.

    Activity statistics in a colloidal glass former: experimental evidence for a dynamical transition

    Full text link
    In a dense colloidal suspension at a volume fraction slightly below that of its glass transition, we follow the trajectories of an assembly of tracers over a large time window. We define a local activity, which quantifies the local tendency of the system to rearrange. We determine the statistics of the time- and space-integrated activity, and we argue that it develops a low-activity tail that appears together with the onset of glassy behavior and heterogeneous dynamics. These rare events may be interpreted as the reflection of an underlying dynamic phase transition. Comment: 20 pages, 16 figures
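
    The abstract does not spell out the definition of the local activity, so the sketch below uses one plausible, purely hypothetical proxy: given tracer trajectories, it counts the displacements that exceed a threshold over a fixed lag time and sums these events over particles and frames to obtain a space- and time-integrated activity K. The threshold, the lag and the array layout are all assumptions made for illustration, not the paper's definition.

        # Hypothetical proxy for a space- and time-integrated activity K from tracer
        # trajectories: count displacements larger than a threshold over a fixed lag.
        import numpy as np

        rng = np.random.default_rng(2)
        T, N = 1000, 200                         # frames and tracers (synthetic data)
        traj = np.cumsum(0.05 * rng.standard_normal((T, N, 2)), axis=0)  # fake trajectories

        lag = 10          # lag time, in frames (assumed)
        threshold = 0.3   # displacement threshold defining a "rearrangement" (assumed)

        disp = np.linalg.norm(traj[lag:] - traj[:-lag], axis=-1)   # shape (T - lag, N)
        events = disp > threshold                                  # local activity per tracer and frame
        K = int(events.sum())                                      # integrated activity

        print(f"integrated activity K = {K} over {T - lag} frames and {N} tracers")
        # Repeating this over many independent time windows yields the distribution of K,
        # whose low-activity tail is the quantity discussed in the paper.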

    Impact of fault prediction on checkpointing strategies

    Get PDF
    This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly in the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We succeed in deriving the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results make it possible to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, thereby demonstrating the validity of the model and the accuracy of the results. Comment: 20 pages
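
    The sketch below is a minimal version of the kind of simulation used to corroborate such an analysis, restricted to the predictor-free baseline: faults arrive as an exponential (memoryless) process with MTBF mu, the application checkpoints every T seconds at cost C, and a fault loses the work done since the last checkpoint and incurs a downtime D and a recovery R. Faults striking during checkpoints, downtime or recovery are ignored for brevity, and all parameter values, as well as the comparison against the Young/Daly period, are illustrative assumptions.

        # Minimal simulator for periodic checkpointing under exponential faults
        # (predictor-free baseline). Reports the waste 1 - useful_work / elapsed_time.
        import math
        import numpy as np

        def simulate_waste(T, C, mu, D, R, total_work, rng):
            t, done = 0.0, 0.0
            next_fault = rng.exponential(mu)
            while done < total_work:
                work = min(T, total_work - done)
                segment = work + C                        # work for T seconds, then checkpoint
                if t + segment <= next_fault:             # segment completes before the next fault
                    t += segment
                    done += work
                else:                                     # fault: lose the segment, pay D + R
                    t = next_fault + D + R
                    next_fault = t + rng.exponential(mu)  # memoryless: re-draw from current time
            return 1.0 - total_work / t

        rng = np.random.default_rng(3)
        C, mu, D, R = 600.0, 86_400.0, 60.0, 600.0        # seconds (assumed)
        total_work = 30 * 86_400.0                        # one month of useful work

        T_yd = math.sqrt(2 * mu * C)                      # Young/Daly period, for comparison
        for T in (T_yd / 2, T_yd, 2 * T_yd):
            w = np.mean([simulate_waste(T, C, mu, D, R, total_work, rng) for _ in range(20)])
            print(f"T = {T:8.0f} s   simulated waste = {w:.3%}")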

    Scheduling malleable task trees

    Get PDF
    Solving sparse linear systems can lead to processing tree workflows on a platform of processors. In this study, we use the model of malleable tasks motivated in [Prasanna96,Beaumont07] to study tree-workflow schedules under two conflicting objectives: makespan minimization and memory minimization. First, we give a simpler proof of the result of [Prasanna96], which makes it possible to compute a makespan-optimal schedule for tree workflows. Then, we study a more realistic speed-up function and show that the previous schedules are not optimal in this context. Finally, we give complexity results concerning the objective of minimizing both makespan and memory.
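
    A toy illustration of the malleable-task model used here, under the common assumption that a task of work w running on p processors completes in time w / p**alpha for some 0 < alpha <= 1: for two independent subtrees sharing P processors, the snippet compares running them one after the other on all P processors with running them side by side, the shares being chosen so that both finish at the same time. The works, alpha and P are illustrative assumptions, and the actual optimality results for whole trees are those of [Prasanna96] and of this paper, not this snippet.

        # Malleable-task model: a task of work w on p processors takes w / p**alpha.
        # Compare two simple ways of running two independent subtrees on P processors.
        P, alpha = 16, 0.9           # processors and speed-up exponent (assumed)
        w1, w2 = 100.0, 40.0         # total work of the two subtrees (assumed)

        # (1) Sequentially, each subtree on all P processors.
        seq = w1 / P**alpha + w2 / P**alpha

        # (2) In parallel, with shares p1 + p2 = P chosen so both finish together:
        #     w1 / p1**alpha = w2 / p2**alpha  =>  p_i proportional to w_i**(1/alpha).
        s1, s2 = w1**(1 / alpha), w2**(1 / alpha)
        p1, p2 = P * s1 / (s1 + s2), P * s2 / (s1 + s2)
        par = w1 / p1**alpha         # equal, by construction, to w2 / p2**alpha

        print(f"sequential: {seq:.2f}   parallel (proportional shares): {par:.2f}")
        # With alpha <= 1 the proportional-share parallel execution is never slower.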

    A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis

    Get PDF
    Presenting a complementary perspective to standard books on algorithms, A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis provides a roadmap for readers to determine the difficulty of an algorithmic problem by finding an optimal solution or proving complexity results. It gives a practical treatment of algorithmic complexity and guides readers in solving algorithmic problems. Divided into three parts, the book offers a comprehensive set of problems with solutions, as well as in-depth case studies that demonstrate how to assess the complexity of a new problem. Part I helps readers understand the main design principles and design efficient algorithms. Part II covers polynomial reductions from NP-complete problems and approaches that go beyond NP-completeness. Part III supplies readers with tools and techniques to evaluate problem complexity, including how to determine which instances are polynomial and which are NP-hard. Drawing on the authors' classroom-tested material, this text takes readers step by step through the concepts and methods for analyzing algorithmic complexity. Through many problems and detailed examples, readers can investigate polynomial-time algorithms and NP-completeness and beyond.